## DeepSeek-R1-0528-Qwen3-8B-GPTQ-Int4-Int8Mix
*Leading in multi-domain benchmarks*

A quantized (GPTQ Int4/Int8-mix) version of DeepSeek-R1-0528-Qwen3-8B, with significant improvements in reasoning ability and a reduced hallucination rate, suitable for a wide range of natural language processing tasks.

- License: MIT
- Type: Large Language Model (Transformers)

- Publisher: QuantTrio
## Kunoichi-DPO-v2-7B-GGUF-Imatrix

A 7B-parameter large language model based on the Mistral architecture and fine-tuned with DPO (Direct Preference Optimization), demonstrating strong performance across multiple benchmarks.

- Type: Large Language Model
- Publisher: Lewdiculous